Imagine a society like ours, but with a moral norm against ever using a right hand to hurt anyone. They kill, rape, torture, and so on, but always with their left hand, never with their right. They are proud to live in a civilized society, and are disgusted by barbaric societies where right-handed harm is common. Their disgust sometimes makes them war against barbarians, to civilize them. But even in war they are careful to show their moral superiority by only killing with their left hands. Are these people as moral as they believe?
James Miller’s recent post on replacing prison with torture suggested to me this allegory of kind right-handers. Most of us are apparently very proud of our moral norm against torture, even though we allow ourselves to impose great harms in other ways. We tend to be disgusted by Muslim torture practices, encouraging us to go to war against such "barbarians." But I fear such moral norms do more to help us feel superior than to reduce the total amount of harm.
So who wants to start a crusade against right-handed harm?
Henry V: "Why is there a presumption that "reducing total harm" is the goal?"
itchy: "Uh, perhaps because without this presumption, there isn't a crime in the first place."
I think you've implicitly equated "reduction of total harm is not the goal" with "harm is not a bad thing." There may be other goals that (in some cases) conflict with reducing total harm. If one claims that reduction of total harm is the goal, then one must explain why this normative value exists. From what moral authority did it derive?
Henry V: In my opinion, any morality in the end makes an appeal to a higher moral authority.
itchy: Higher than what? Yes, any morality makes presumptions. And, without any morality, there is no reason to hold anyone responsible for any action, so this discussion would be moot.
Higher than the one making the claim, I imagine. I'm not saying I'm opposed to this line of reasoning. But, I am saying that it should be explicit rather than implicit in anyone's argument, including Robin's. I'm curious as to what Robin's source of morality is. Are certain things right and certain things wrong, or not?
Henry V wrote:
Why is there a presumption that "reducing total harm" is the goal (the goal of whom? society as a whole? individuals?)?
And, how would this be quantified when individuals have different preferences over different forms of harm (corporal punishment, imprisonment, poverty, etc.)?
In my opinion, any morality in the end makes an appeal to a higher moral authority.
I think these are excellent questions that must be answered before we can come to any real agreement on moral issues. I think the reason we have such trouble agreeing on these issues is that the operation of the human brain is the source of all this confusion. And since we don't agree, as a society, on what the brain is doing, we can't agree on a workable foundation for answering these most important of questions.
I will, however, give you what I believe to be the answer to what the brain is doing, and how it leads to answers to these sorts of questions. I'm interested in these topics because of my interest in creating AI. I can't create AI until I can answer the question of what the brain is doing. My current best guess as to what the brain does, and how it works, leads to many very interesting potential answers to social questions such as morality.
I believe that the part of the brain responsible for producing all our voluntary behavior is just a reinforcement learning machine. As such, the goal of that machine is simply to try to maximize all future rewards, as defined by a genetically created definition of good and bad. Good, for us, covers all those low-level things we are genetically predisposed to be attracted to (pleasure), and bad covers the things we are wired to avoid (the stuff we call pain). As such, the purpose of the brain is not actually survival. Its direct purpose is simply the avoidance of pain and the production of pleasure. Indirectly this increases our odds of survival, but only because of what evolution has hard-wired into us as the prime measures of pain and pleasure.
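To make that concrete, here is a minimal toy sketch of the kind of reward-maximizing machine I mean (in Python; the little grid-world, the reward values, and the learning parameters are all invented for illustration, not a model of any real brain):

import random

# Five states in a line. Reaching the rightmost state yields +1 ("pleasure");
# reaching the leftmost yields -1 ("pain"). Everything else is neutral.
N_STATES = 5
ACTIONS = [-1, +1]              # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else (-1.0 if nxt == 0 else 0.0)
    return nxt, reward

for _ in range(2000):
    s = 2                                       # start in the middle
    for _ in range(20):
        if random.random() < EPSILON:           # occasionally explore
            a = random.choice(ACTIONS)
        else:                                   # otherwise act greedily
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r = step(s, a)
        # Q-learning update: value = immediate reward + discounted future reward
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
        s = s2

# After learning, the greedy policy in every state heads toward "pleasure".
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})

The point of the sketch is only that nothing in the machine mentions survival; it chases reward and flees punishment, and any survival value is a side effect of how the rewards were wired.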
This distinction is important because it changes our understanding of our purpose in life. Our genes want us to help them survive (using Dawkins' view), but the purpose of our brain is not the purpose of the entire body. And what we normally call "our purpose" is in fact the purpose of the brain, because my foot is not what is writing this message to you; my brain is. I am a brain talking to you, and my purpose is simply to maximize my own long-term pleasure and minimize my own long-term pain.
This makes it easy to understand why we choose to use birth control. It's because our purpose is not to reproduce. It's to produce long term pleasure. Birth control is one way to increase our long term pleasure. We get all the pleasure of having sex, without the pain of producing a child when we don't want to produce a child. Our selfish genes might not be "happy" with this, but that's their problem, not ours. They are the ones that wired us to like sex, and wired us to be a pleasure seeking machine. They did it because for the most part, it greatly increases their odds of survival. But our purpose is not survival, it's pleasure.
In the long term, if our use of birth control reduces our odds of survival, evolution and natural selection will re-design humans in some way to make future humans do a better job of reproducing. But that is not our problem. Our problem, as given to us by the way evolution designed us, is to just do whatever we can, to be happy, until the day we die.
So with that foundation, let's go back to Henry's questions and see what kind of answers we can produce.
Why is there a presumption that "reducing total harm" is the goal (the goal of whom? society as a whole? individuals?)?
From my perspective, we are brains built for the purpose of minimizing total pain, which, because of how evolution wired us, is a close match to reducing total harm. That is, most things that harm us cause us pain, and we are machines built to avoid pain. But we are built, for the most part, only to reduce harm to ourselves. However, it's easy to see that, for anything in our environment that acts as a tool for helping us reduce our own harm, we should also do what we can to reduce harm to it. I don't want my car to be harmed, because my car helps me prevent future pain. And for the same reason, I don't want other people to be harmed, because for the most part, other people help me reduce harm to myself as well. So it's easy to generalize from our prime goal of reducing personal pain to a general rule of reducing total pain for all humans.
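One crude way to put numbers on that generalization (the pain values and weights here are entirely made up; the point is only the shape of the idea):

# Hypothetical numbers throughout: a purely selfish pain-minimizer still
# ends up "caring" about others, in proportion to how strongly harm to
# them tends to feed back into harm to itself.
def my_effective_disutility(my_pain, others):
    # others: list of (their_pain, weight) pairs, where weight says how
    # much harm to them eventually becomes harm to me.
    return my_pain + sum(pain * weight for pain, weight in others)

print(my_effective_disutility(
    my_pain=0.0,
    others=[(5.0, 0.8),     # someone close to me: strong feedback
            (5.0, 0.05)]))  # a distant stranger: weak but nonzero feedback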
And, how would this be quantified when individuals have different preferences over different forms of harm (corporal punishment, imprisonment, poverty, etc.)?
That's very hard to do. Yet the brain, based on how it actually operates, has to quantify everything. If we could accurately scan a person's brain, we could measure the actual levels of pain associated with different experiences and use that as a starting point for making social decisions. But as you say, everyone in a society will feel a different level of pain for a given event. And we aren't only talking about the simple physical pain of someone being tortured. We are talking about the indirect forms of pain we all feel if we simply know that someone is being tortured. So when we harm someone, such as by torture, we are not only creating that pain in them; we are also creating pain in everyone in the society who feels pain simply from knowing they have allowed someone to be tortured.
When we outlaw torture, it's not really an issue of how much pain the person being tortured is feeling. It's our own selfish need not to feel bad about letting it happen to them. We outlaw torture more because it causes us pain than because of the pain felt by the individual. Part of that pain, of course, is the fear that we might find ourselves on the receiving end of that torture some day. The harm caused to one person being tortured is not nearly as bad as the total pain felt by 300 million people who know they have allowed that person to be tortured.
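To see how that arithmetic could come out that way (every magnitude below is invented for illustration; nothing is measured):

# All numbers are made up, in arbitrary "pain units".
direct_pain  = 1000.0            # pain to the one person tortured
per_onlooker = 0.00001           # faint unease per citizen who knows it happened
population   = 300_000_000
indirect_pain = per_onlooker * population   # = 3000.0
print(indirect_pain > direct_pain)          # True, under these assumptions

Even a vanishingly small discomfort per onlooker, summed over a whole society, can dominate the direct harm; that is the whole of the claim.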
In my opinion, any morality in the end makes an appeal to a higher moral authority.
Yes, but I think that higher moral authority stops at the brain. The bottom line for me is that I only care about what causes me pain and pleasure, and that's defined by the physical operation of my brain. But it so happens that seeing other people in pain, especially people who mean a lot to me, causes me a lot of pain. So, to reduce my pain, I need to reduce their pain, and this need extends out far and wide, not only to humans but to some extent to animals, plants, bugs, and the environment. I care most about the people and things that most directly affect my future, but I care to some degree about just about everything on the planet, because I understand that things that happen to even the rocks on the planet might come back to cause me pain one day.
The foundation of my morality (aka what I care about and what I sense as good and bad) was built into me by evolution. I don't need to appeal to a morality higher than my own brain, because my own brain, and how it's wired, is the source and root definition of what is good and what is bad in the universe for me. There is no higher authority for me to appeal to when I look for what is right and what is wrong.
So, you see, by understanding what the brain is doing, I believe we can answer these questions, which otherwise have no foundation of understanding. And though not everyone agrees with my foundation, I think you can see that when we figure out what the brain is doing, we will have a foundation for what morality is, and for why humans see some things as bad and some as good. If we understand that foundation, we should be able to make better social decisions.

A key one, as I see it, is replacing the evolution-inspired goal of survival with the equally scientific, evolution-inspired goal of maximizing happiness. Evolution built human bodies as survival machines, but it built brains to be pleasure-maximizing machines. And as a collection of brains talking to each other, trying to figure out what morals are, we should understand that our prime purpose is maximizing happiness, not survival. As such, we need to pay more attention to the total happiness of all living humans, and worry less about our struggle for survival.

Dying isn't bad as long as it doesn't come with pain. If we could learn to tap into our pleasure centers and stimulate ourselves with pure pleasure, that would be an ideal way to die. It would be like going to heaven, a place of pure pleasure and no pain. When we have to die, that would be the way to go. And if we as a society decide we need to kill people, that would be by far the most humane way to do it: we kill them by sending them to heaven, by removing all pain from their life, until the point their body stops working and they die. That's just one example of how a better understanding of what the brain is doing might create some big changes in how we view things like capital punishment.

I think that simply understanding that humans are reinforcement learning machines is enough to explain what the true foundation of all our morals is, and a good enough foundation to allow us to make better social decisions. But before this can be useful, we need more people to understand what this means, and so far I have had no end of trouble finding people who agree that humans are reinforcement learning machines. They are too attached to the belief that humans are something more complex than this to accept this sort of answer, or even give it serious consideration.